Stochastic Comparative Statics in Markov Decision Processes

Authors

Abstract

In multiperiod stochastic optimization problems, the future optimal decision is a random variable whose distribution depends on the parameters of the problem. I analyze how the expected value of this random variable changes as a function of the dynamic parameters in the context of Markov decision processes. I call this analysis stochastic comparative statics. I derive both comparative statics results and stochastic comparative statics results showing how current and future optimal decisions change in response to changes in the single-period payoff function, the discount factor, the initial state of the system, and the transition probability function. I apply my results to various models from the economics and operations research literature, including investment theory, dynamic pricing models, controlled random walks, and comparisons of stationary distributions.
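A minimal sketch of one comparative-statics effect described above: in a hypothetical two-state "consume vs. invest" MDP (this toy model is my own illustration, not a model from the paper), value iteration shows the optimal current decision switching as the discount factor rises.

```python
# Hypothetical two-state MDP illustrating comparative statics in the
# discount factor: a patient decision maker invests, an impatient one consumes.

def optimal_action(beta, n_iter=500):
    """Value iteration on a toy MDP.

    State 0 (poor): action 'consume' pays 1 and stays in state 0;
                    action 'invest' pays 0 and moves to state 1.
    State 1 (rich): the single available action pays 2 and stays in state 1.
    Returns the optimal action in state 0.
    """
    v = [0.0, 0.0]  # value estimates for states 0 and 1
    for _ in range(n_iter):
        consume = 1.0 + beta * v[0]
        invest = 0.0 + beta * v[1]
        v = [max(consume, invest), 2.0 + beta * v[1]]
    return "consume" if 1.0 + beta * v[0] >= beta * v[1] else "invest"

print(optimal_action(0.2))  # -> consume
print(optimal_action(0.9))  # -> invest
```

Raising the discount factor from 0.2 to 0.9 flips the optimal current decision from consuming to investing, the kind of monotone response in a model parameter that the paper's results formalize.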


Similar Resources

Markov Decision Processes: Discrete Stochastic Dynamic Programming

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover...


Stochastic Dominance-Constrained Markov Decision Processes

We are interested in risk constraints for infinite horizon discrete time Markov decision processes (MDPs). Starting with average reward MDPs, we show that increasing concave stochastic dominance constraints on the empirical distribution of reward lead to linear constraints on occupation measures. An optimal policy for the resulting class of dominance-constrained MDPs is obtained by solving a li...


Online Learning in Stochastic Games and Markov Decision Processes

In their standard formulations, stochastic games and Markov decision processes assume a rational opponent or a stationary environment. Online learning algorithms can adapt to arbitrary opponents and non-stationary environments, but do not incorporate the dynamic structure of stochastic games or Markov decision processes. We survey recent approaches that apply online learning to dynamic environm...


Bounded Parameter Markov Decision Processes

In this paper, we introduce the notion of a bounded parameter Markov decision process as a generalization of the traditional exact MDP. A bounded parameter MDP is a set of exact MDPs specified by giving upper and lower bounds on transition probabilities and rewards (all the MDPs in the set share the same state and action space). Bounded parameter MDPs can be used to represent variation or uncert...


Learning Qualitative Markov Decision Processes

To navigate in natural environments, a robot must decide the best action to take according to its current situation and goal, a problem that can be represented as a Markov Decision Process (MDP). In general, it is assumed that a reasonable state representation and transition model can be provided by the user to the system. When dealing with complex domains, however, it is not always easy or pos...



Journal

Journal title: Mathematics of Operations Research

Year: 2021

ISSN: 0364-765X, 1526-5471

DOI: https://doi.org/10.1287/moor.2020.1086